Stacked Quantizers for Compositional Vector Compression

Authors

  • Julieta Martinez
  • Holger H. Hoos
  • James J. Little
Abstract

Recently, Babenko and Lempitsky [3] introduced Additive Quantization (AQ), a generalization of Product Quantization (PQ) where a non-independent set of codebooks is used to compress vectors into small binary codes. Unfortunately, under this scheme encoding cannot be done independently in each codebook, and optimal encoding is an NP-hard problem. In this paper, we observe that PQ and AQ are both compositional quantizers that lie on the extremes of the codebook dependence-independence assumption, and explore an intermediate approach that exploits a hierarchical structure in the codebooks. This results in a method that achieves quantization error on par with or lower than AQ, while being several orders of magnitude faster. We perform a complexity analysis of PQ, AQ and our method, and evaluate our approach on standard benchmarks of SIFT and GIST descriptors, as well as on new datasets of features obtained from state-of-the-art convolutional neural networks.

Similar Articles

ECMCS '99 EURASIP Conference DSP for Multimedia

Vector quantization is an efficient compression technique for which many variants are known. Product code vector quantizers use multiple codebooks for separately coding features of a vector. In shape-gain and mean-shape-gain vector quantizers, the bottleneck in the encoder is a nearest-neighbor search on a hypersphere. We define an angular constraint for speeding up the search in shape-gain and me...


Robust vector quantization by competitive learning

Competitive neural networks can be used to efficiently quantize image and video data. We discuss a novel class of vector quantizers which perform noise-robust data compression. The vector quantizers are trained to simultaneously compensate for channel noise and code-vector elimination noise. The training algorithm to estimate code vectors is derived by the maximum entropy principle in the spirit of d...


Necessary conditions for the optimality of variable-rate residual vector quantizers

Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression [1]. The competitive performance of RVQ reported in [1] results from the joint optimization of variable-rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an ...


Recursive partitioning to reduce distortion

Adaptive partitioning of a multidimensional feature space plays a fundamental role in the design of data-compression schemes. Most partition-based design methods operate in an iterative fashion, seeking to reduce distortion at each stage of their operation by implementing a linear split of a selected cell. The operation and eventual outcome of such methods is easily described in terms of binary...


Sample-adaptive product quantization: asymptotic analysis and examples

Vector quantization (VQ) is an efficient data compression technique for low-bit-rate applications. However, the major disadvantage of VQ is that its encoding complexity increases dramatically with bit rate and vector dimension. Even though one can use a modified VQ, such as the tree-structured VQ, to reduce the encoding complexity, it is practically infeasible to implement such a VQ at a high b...
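Product quantization, which the entry above and the main paper both build on, sidesteps this encoding complexity by splitting each vector into independent subvectors, each quantized in its own small codebook. A minimal sketch (the function names `pq_encode`/`pq_decode` are illustrative, not taken from any of the papers listed):

```python
import numpy as np

def pq_encode(x, codebooks):
    """Split x into len(codebooks) subvectors and quantize each one
    independently in its own codebook (the PQ independence assumption)."""
    subs = np.split(x, len(codebooks))
    return [int(((C - s) ** 2).sum(1).argmin()) for C, s in zip(codebooks, subs)]

def pq_decode(codes, codebooks):
    """Reconstruct by concatenating the chosen centroids."""
    return np.concatenate([C[k] for C, k in zip(codebooks, codes)])
```

Each codebook only sees its own subvector, so encoding cost grows with the number of codebooks rather than with the total number of composite codewords; the price is the assumption that the subspaces are statistically independent, which AQ and the stacked scheme relax.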

متن کامل


Journal:
  • CoRR

Volume abs/1411.2173  Issue 

Pages  -

Publication date 2014